Results 1 - 2 of 2
1.
IEEE Pervasive Computing ; 2022.
Article in English | Scopus | ID: covidwho-1731035

ABSTRACT

While current meeting tools are able to capture key analytics (e.g., transcript and summarization), they often do not capture nuanced emotions (e.g., disappointment and feeling impressed). Given the high number of meetings that were held online during the COVID-19 pandemic, we had an unprecedented opportunity to record extensive meeting data with a newly developed meeting companion application. We analyzed 72 hours of conversations from 85 real-world virtual meetings and 256 self-reported meeting success scores. We did so by developing a deep-learning framework that can extract 32 nuanced emotions from meeting transcripts, and by then testing a variety of models predicting meeting success from the extracted emotions. We found that rare emotions (e.g., disappointment and excitement) were generally more predictive of success than more common emotions. This demonstrates the importance of quantifying nuanced emotions to further improve productivity analytics and, in the long term, employee well-being. © IEEE
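
The modeling step the abstract describes, predicting meeting success from 32 extracted emotion scores, can be illustrated with a minimal sketch. The classifier choice, the synthetic data, and the binary success label below are assumptions for illustration only; the paper's actual framework, features, and evaluation are not reproduced here.

```python
# Hypothetical sketch: predict a meeting-success label from a 32-dimensional
# vector of per-meeting emotion scores, as the abstract describes at a high
# level. Data and model choice are illustrative assumptions, not the paper's.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((85, 32))           # one emotion-score vector per meeting
y = rng.integers(0, 2, size=85)    # placeholder binary success labels

clf = LogisticRegression(max_iter=1000)
print("Mean ROC-AUC:",
      cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())

# Coefficient magnitudes hint at which emotions drive the prediction,
# e.g., whether rarer emotions carry more weight than common ones.
clf.fit(X, y)
print("Most predictive emotion indices:",
      np.argsort(np.abs(clf.coef_[0]))[::-1][:5])
```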

2.
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 ; : 2862-2873, 2021.
Article in English | Scopus | ID: covidwho-1678733

ABSTRACT

The automated transcription of spoken language, and of meetings in particular, is becoming more widespread as automatic speech recognition systems become more accurate. This trend has accelerated significantly since the outbreak of the COVID-19 pandemic, which led to a major increase in the number of online meetings. However, the transcription of spoken language has received much less attention from the NLP community than documents and other forms of written language. In this paper, we study a variation of the summarization problem over transcribed spoken language: given a transcribed meeting and an action item (i.e., a commitment or request to perform a task), our goal is to generate a coherent and self-contained rephrasing of the action item. To this end, we compiled a novel dataset of annotated meeting transcripts, including human rephrasings of action items. We use state-of-the-art supervised text generation techniques and establish a strong baseline based on BART and UniLM (two pretrained transformer models). Because natural speech is often broken and incomplete, the task proves harder than the analogous task over email data. In particular, we show that the baseline models can be greatly improved once they are provided with additional information. We compare two approaches: one incorporates features extracted by coreference resolution; the other uses additional annotations to train an auxiliary model to detect the relevant context in the text. Based on a systematic human evaluation, our best models exhibit near-human-level rephrasing capability on a constrained subset of the problem. © 2021 Association for Computational Linguistics
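
As a rough illustration of the kind of seq2seq setup the abstract describes, the sketch below feeds a disfluent action item together with its surrounding transcript context into a pretrained BART model and decodes a rephrasing. The checkpoint (facebook/bart-base), the "context: ... action item: ..." input format, and the example utterances are assumptions for illustration; the paper's fine-tuned models, dataset, and exact input encoding are not reproduced here.

```python
# Minimal sketch, not the paper's model: an off-the-shelf BART checkpoint is
# used here without fine-tuning, so the generated text is only illustrative.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Hypothetical disfluent meeting snippet and the action item found in it.
context = "A: so i'll send the uh the updated slides B: great A: yeah before friday"
action_item = "i'll send the uh the updated slides"

inputs = tokenizer(f"context: {context} action item: {action_item}",
                   return_tensors="pt", truncation=True, max_length=512)
output_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

In the paper's setting, a model like this would be fine-tuned on the annotated meeting transcripts so that the generated rephrasing is coherent and self-contained rather than a near-copy of the input.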
